Chapter 10: Exercises

Part 1

  1. Suppose our mathematical model is as follows:
    $y = ax^p + bx^q$
    and the data points are as follows:

    x:   1   2   3   4    5    6    7    8    9   10
    y:   2  14  43  90  160  255  378  530  715  930

    Use the following two methods to find the optimal values of the parameters a, b, p, and q, and compute the total sum of squared errors. (A sketch of both methods is given after this item.)

    1. Treat a, b, p, and q all as nonlinear parameters, and use fminsearch to find their optimal values.
    2. Treat a and b as linear parameters, computed via "left division"; treat p and q as nonlinear parameters, and use fminsearch to find their optimal values.
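    A minimal sketch of both methods (the file name and initial guesses are illustrative, not prescribed by the exercise):

      function exercise01Sketch   % hypothetical file name: exercise01Sketch.m
      x=(1:10)'; y=[2 14 43 90 160 255 378 530 715 930]';
      % Method 1: treat a, b, p, q all as nonlinear parameters
      sse1=@(t) sum((t(1)*x.^t(3)+t(2)*x.^t(4)-y).^2);
      t1=fminsearch(sse1, [1 1 2 3]);    % initial guess for [a b p q]
      fprintf('Method 1: theta=%s, SSE=%g\n', mat2str(t1), sse1(t1));
      % Method 2 (hybrid): p, q nonlinear (fminsearch); a, b linear (left division)
      pq=fminsearch(@(pq)hybridSse(pq, x, y), [2 3]);
      [sse2, ab]=hybridSse(pq, x, y);
      fprintf('Method 2: a=%g, b=%g, p=%g, q=%g, SSE=%g\n', ab, pq, sse2);

      function [sse, ab]=hybridSse(pq, x, y)
      A=[x.^pq(1), x.^pq(2)];    % design matrix for fixed p, q
      ab=A\y;                    % best a, b via least squares ("left division")
      sse=sum((A*ab-y).^2);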
  2. When applying the least-squares method, why do we minimize the squared error rather than the absolute error?
    Ans: Because differentiating the total squared error yields a set of linear equations, from which an analytic solution can be obtained. If the absolute error were used, differentiation would not yield linear equations, making the problem much harder to solve.
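    For instance, for the straight-line model $y=ax+b$, the total squared error is $E(a,b)=\sum_{i=1}^n (ax_i+b-y_i)^2$, and setting its partial derivatives to zero gives a pair of linear "normal equations": $$ \frac{\partial E}{\partial a}=2\sum_{i=1}^n x_i(ax_i+b-y_i)=0, \qquad \frac{\partial E}{\partial b}=2\sum_{i=1}^n (ax_i+b-y_i)=0. $$ With the absolute error $\sum_i|ax_i+b-y_i|$, the derivatives involve sign terms, so no such linear system exists.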
  3. To minimize the fitting error of an exponential model $y=ae^{bx}$, which of the following methods cannot be applied directly?
    1. Transformation method + least-squares method
    2. Hybrid method
    3. Least-squares method
    4. Simplex downhill search
  4. The purpose of the transformation method is to convert a nonlinear mathematical model into one that contains only linear parameters, so that data fitting becomes straightforward. What is the general form of the target model after such a transformation? Express it as a single equation.
    Ans: $f(X, y)= \alpha_1 g_1(X, y) + \alpha_2 g_2(X, y) + \cdots + \alpha_n g_n(X, y)$, where $n$ is the number of adjustable parameters, which must equal the number of adjustable parameters in the original model.
  5. Load the sunspot data sunspot.dat into MATLAB (located under {matlab root}\toolbox\matlab\demos\), use the approach you consider most suitable to find a mathematical model, and plot the curve-fitting result.
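    A possible starting point (the choice of model is entirely up to you; this sketch only loads the data and overlays a high-order polynomial fit for reference):

      load sunspot.dat                          % two columns: year, sunspot number
      year=sunspot(:,1); number=sunspot(:,2);
      yearNorm=(year-mean(year))/std(year);     % z-normalization to avoid ill-conditioning
      p=polyfit(yearNorm, number, 20);          % a high-order polynomial, for reference only
      plot(year, number, '.', year, polyval(p, yearNorm));
      xlabel('Year'); ylabel('Sunspot number');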

Part 2

  1. Difference between interpolation and data fitting: What is the key difference between interpolation and data fitting?
  2. Polynomial order for prediction: Briefly explain why increasing the order of a fitting polynomial does not always lead to better prediction.
  3. Quadratic form: Express the following equation in quadratic form $\mathbf{x}^TA\mathbf{x}$, where $\mathbf{x}=[x, y, z]^T$ and $A$ is symmetric: $$x^2+2y^2+3z^2+4xy+6yz+8xz$$
    Solution: $$ x^2+2y^2+3z^2+4xy+6yz+8xz = \left[\begin{matrix} x\\ y\\ z\\ \end{matrix}\right]^T \left[\begin{matrix} 1 & 2 & 4\\ 2 & 2 & 3\\ 4 & 3 & 3\\ \end{matrix}\right] \left[\begin{matrix} x\\ y\\ z\\ \end{matrix}\right] $$
  4. Gradients: Give the simplest matrix format of the following expressions
    1. $\nabla \mathbf{c}^T\mathbf{x}$
    2. $\nabla \mathbf{x}^T\mathbf{x}$
    3. $\nabla \mathbf{c}^TA\mathbf{x}$
    4. $\nabla \mathbf{x}^TA\mathbf{c}$
    5. $\nabla \mathbf{x}^TA\mathbf{x}$ (where $A$ is symmetric)
    6. $\nabla \mathbf{x}^TA\mathbf{x}$ (where $A$ is asymmetric)
    7. $\nabla (\mathbf{x}^TA\mathbf{x}+\mathbf{b}^T\mathbf{x}+\mathbf{c})$ (where $A$ is symmetric)

    Solution:
    1. $\nabla \mathbf{c}^T\mathbf{x} = \mathbf{c}$
    2. $\nabla \mathbf{x}^T\mathbf{x} = 2\mathbf{x}$
    3. $\nabla \mathbf{c}^TA\mathbf{x} = A^T\mathbf{c}$
    4. $\nabla \mathbf{x}^TA\mathbf{c} = A \mathbf{c}$
    5. $\nabla \mathbf{x}^TA\mathbf{x} = 2A\mathbf{x}$ (where $A$ is symmetric)
    6. $\nabla \mathbf{x}^TA\mathbf{x} = (A+A^T)\mathbf{x}$ (where $A$ is asymmetric)
    7. $\nabla (\mathbf{x}^TA\mathbf{x}+\mathbf{b}^T\mathbf{x}+\mathbf{c}) = 2A\mathbf{x}+\mathbf{b}$ (where $A$ is symmetric)
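    These identities are easy to spot-check numerically. A minimal sketch for identity 6 with an asymmetric $A$, using a central-difference approximation of the gradient:

      n=4; A=randn(n); x0=randn(n,1);     % a random asymmetric A and a test point
      f=@(x) x'*A*x;
      g=zeros(n,1); h=1e-6;
      for i=1:n
          e=zeros(n,1); e(i)=h;
          g(i)=(f(x0+e)-f(x0-e))/(2*h);   % central-difference approximation
      end
      disp([g, (A+A')*x0])                % the two columns should match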
  5. Derivation of LSE: Use matrix notations to prove that the least-squares estimate to $A\mathbf{\theta}=\mathbf{b}$ is $\hat{\mathbf{\theta}} = (A^TA)^{-1}A^T\mathbf{b}$.
  6. Derivation of weighted LSE: Sometimes we want to assign different weights to different input/output pairs for data fitting. Under this condition, the objective function for solving $A\mathbf{x}=\mathbf{b}$ can be expressed as $e(\mathbf{x})=(A\mathbf{x}-\mathbf{b})^TW(A\mathbf{x}-\mathbf{b})$, where $W$ is a diagonal matrix of weights. Please derive the weighted least-squares estimate to $A\mathbf{x}=\mathbf{b}$ based on the given objective function.
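    Setting the gradient of $e(\mathbf{x})$ to zero should lead to $\hat{\mathbf{x}}=(A^TWA)^{-1}A^TW\mathbf{b}$. Once derived, the formula can be spot-checked against MATLAB's lscov, as in this sketch with random data:

      A=randn(10,3); b=randn(10,1); w=rand(10,1);   % a random overdetermined system and weights
      W=diag(w);
      xHat=(A'*W*A)\(A'*W*b);        % weighted LSE from the derivation
      disp([xHat, lscov(A, b, w)])   % the two columns should agree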
  7. Find a point that has the minimum total squared distance to 3 lines: Given 3 lines on a plane: $$ \left\{ \begin{matrix} x & = & 0\\ y & = & 0\\ 4x+3y & = & 12\\ \end{matrix} \right. $$ You need to find a point P that has the minimum total squared distance to these 3 lines. You can formulate this as a least-squares problem by using the formula for the distance between a point and a line. In particular, the total squared distance can be formulated as $E(\theta) = ||A\theta-\mathbf{b}||^2$, where $A$ and $\mathbf{b}$ are known, $\theta$ is unknown (the point to be found), and the solution can be obtained in MATLAB by left division.
    1. Give the value of $A$ and $\mathbf{b}$ for solving this problem.
    2. When P is found, what is the distance to each side of the triangle (formed by the 3 lines)?
      Hint: This can be solved by the Cauchy–Schwarz inequality.
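    A sketch of the setup (it essentially answers part 1, so try deriving it yourself first): dividing each line equation by the norm of its normal vector makes every residual of $A\theta-\mathbf{b}$ an exact point-to-line distance.

      A=[1 0; 0 1; 4/5 3/5];   % unit normals of x=0, y=0, and 4x+3y=12 (third row divided by 5)
      b=[0; 0; 12/5];
      P=A\b;                   % least-squares point via left division
      dist=abs(A*P-b);         % distance from P to each of the 3 lines
      fprintf('P=%s, distances=%s\n', mat2str(P,4), mat2str(dist',4));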
  8. Distance of a point to a hyperplane:
    1. Given a point $P=[x_0, y_0]$ in a 2D plane, what is the shortest distance between P and a hyperplane $L: ax+by+c=0$?
    2. Given a point $P=[x_0, y_0, z_0]$ in a 3D space, what is the shortest distance between P and a hyperplane $L: ax+by+cz+d=0$?
    3. Given a point $P=[x_1, x_2, ..., x_d]$ in a d-dim space, prove that the shortest distance between P and a hyperplane $L: a_0+\sum_{i=1}^d a_i x_i=0$ is given by $$ \frac{|a_0+\sum_{i=1}^d a_i x_i|}{\sqrt{\sum_{i=1}^d a_i^2}} $$
  9. Find a point that has the minimum total squared distance to a set of lines or planes: This problem is an extension of the previous ones. In other words, the problem can be formulated as a least-squares problem where the point P can be found by left division in MATLAB (that is, A\b).
    1. You need to find a point P that has the minimum total squared distance to a set of $n$ lines: $$ \left\{ \begin{matrix} a_1x+b_1y+c_1=0\\ a_2x+b_2y+c_2=0\\ ...\\ a_nx+b_ny+c_n=0\\ \end{matrix} \right. $$ Then what are the formulas for $A$ and $\mathbf{b}$?
    2. You need to find a point P that has the minimum total squared distance to a set of $n$ planes: $$ \left\{ \begin{matrix} a_1x+b_1y+c_1z+d_1=0\\ a_2x+b_2y+c_2z+d_2=0\\ ...\\ a_nx+b_ny+c_nz+d_n=0\\ \end{matrix} \right. $$ Then what are the formulas for $A$ and $\mathbf{b}$?
  10. Function for line fitting via LSE: Write a function lineFitViaLse.m to perform line fitting via least-squares estimate ("left division" or "right division" in MATLAB), with the following usage:
    theta = lineFitViaLse(X),
    where X is a 2-by-n dataset matrix, with each column being an observation. In other words, for a column in X, the second element is the output while the first element is the input part of the observation. Your function should return a column vector $\theta=\left[\begin{matrix}a\\b\end{matrix}\right]$, where the fitting line can be expressed as $y=ax+b$.

    Test case: lineFitViaLse(X) returns [0.5323; 10.1155] when

    X=[38 93 34 68 33 3 58 87 11 22;25.5 61.2 31.5 43.8 29.5 7.9 38.2 57.6 18.4 25.5].
    If you plot the fitted line over the data points, you can check the result visually.
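    A minimal sketch of one possible implementation, which should reproduce the test case above:

      function theta=lineFitViaLse(X)
      % lineFitViaLse: Line fitting y=a*x+b via least-squares estimate
      x=X(1,:)'; y=X(2,:)';
      A=[x, ones(size(x))];    % design matrix for y=a*x+b
      theta=A\y;               % "left division" returns [a; b]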

  11. Function for plane fitting via LSE: Write a function planeFitViaLse.m to perform plane fitting via least-squares estimate ("left division" or "right division" in MATLAB), with the following usage:
    theta = planeFitViaLse(X),
    where X is a 3-by-n dataset matrix, with each column being an observation. In other words, for a column in X, the last element is the output while all the other elements are the input part of the observation. Your function should return a column vector $\theta=\left[\begin{matrix}a\\b\\c\end{matrix}\right]$, where the fitting plane can be expressed as $y=ax_1+bx_2+c$.

    Test case: planeFitViaLse(X) returns [2; 1; 3] when

    X=[38 93 34 68 33 3 58 87 11 22;69 43 42 92 60 57 6 82 1 83;148 232 113 231 129 66 125 259 26 130].
  12. Function for min. total squared distance to a set of planes: Write a function minSquaredDist2planes(P) to return the point in a 3D space that has the minimum total squared distance to a set of m planes defined by a 4-by-m matrix P, where the $i$th column represents a plane of p(1,i)*x+p(2,i)*y+p(3,i)*z+p(4,i)=0.

    Test cases:

    • minSquaredDist2planes([1 6 5 2 1;1 7 7 1 0;1 1 9 1 7;0 7 1 -5 -7]) returns [4.3136; -4.5380; 0.5579].
    Note that your function should use least-squares estimate ("left division" or "right division" in MATLAB) to find the answer directly.

    Here is a test script which can be run with minSquaredDist2planes.p:

    Example 1: 10-曲線擬合與迴歸分析/minSquaredDist2planes01.m
    p1=minSquaredDist2planes([1 6 5;7 7 7;1 1 4;6 7 5])
    p2=minSquaredDist2planes([1 6 5 2 1;1 7 7 1 0;1 1 9 1 7;0 7 1 -5 -7])

    Output:
    p1 =
       -0.2000
       -0.9143
        0.6000
    p2 =
        4.3136
       -4.5380
        0.5579
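    A minimal sketch of one way to implement it (each plane equation is first scaled so that its normal vector has unit length, making the residuals true point-to-plane distances):

      function point=minSquaredDist2planes(P)
      % minSquaredDist2planes: Point with minimum total squared distance to a set of planes
      len=sqrt(sum(P(1:3,:).^2));     % norms of the plane normal vectors
      Pn=P./repmat(len, 4, 1);        % scale each column so its normal is a unit vector
      A=Pn(1:3,:)'; b=-Pn(4,:)';      % rows: a_i*x + b_i*y + c_i*z = -d_i
      point=A\b;                      % least-squares estimate via left division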

  13. Transform nonlinear models into linear ones: Transform the following nonlinear models into linear ones. Be sure to express the original nonlinear parameters in terms of the linear parameters after transformation.
    1. $y=\frac{ax}{x^2+b^2}$ (a worked example for this model is given after the list)
    2. $(x-a)^2+(y-b)^2=c^2$
    3. $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$
    4. $y=ae^{\left(-\frac{x-c}{b}\right)^2}$
    5. All the nonlinear models that can be converted into linear ones (as listed in the text or slides).
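    As a worked example for the first model: multiplying both sides by $x^2+b^2$ gives $y(x^2+b^2)=ax$, i.e., $$ x^2y = \alpha_1 x + \alpha_2 y, \quad \textrm{with } \alpha_1=a, \; \alpha_2=-b^2, $$ which is linear in $\alpha_1$ and $\alpha_2$; the original parameters are then recovered as $a=\alpha_1$ and $b=\sqrt{-\alpha_2}$.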
  14. Hybrid method for data fitting: Please describe how to use the hybrid method for data fitting of the model $y=ae^{bx}+ce^{dx}$, and explain the method's advantages.
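    A sketch of the hybrid method for this model (the function name and initial guesses are illustrative): b and d are searched by fminsearch, while for each candidate pair the best a and c follow from left division.

      function [theta, sse]=expFitHybrid(x, y)   % hypothetical: fit y=a*exp(b*x)+c*exp(d*x)
      bd=fminsearch(@(bd)sseGivenBd(bd, x(:), y(:)), [0.1; -0.1]);
      [sse, ac]=sseGivenBd(bd, x(:), y(:));
      theta=[ac(1); bd(1); ac(2); bd(2)];        % [a; b; c; d]

      function [sse, ac]=sseGivenBd(bd, x, y)
      A=[exp(bd(1)*x), exp(bd(2)*x)];   % design matrix for the linear parameters a, c
      ac=A\y;                           % optimal a, c for this (b, d) via left division
      sse=sum((A*ac-y).^2);

    The main advantage is that fminsearch only needs to search a 2-dimensional space (b and d) instead of a 4-dimensional one, which is usually faster and less likely to get stuck.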
  15. Hybrid method after transformation for data fitting: The equation for a general ellipse model can be expressed as follows: $$ \left(\frac{x-c}{a}\right)^2+\left(\frac{y-d}{b}\right)^2=1 $$ where a, b, c, and d are 4 parameters of the model.
    1. In order to use separable least squares for curve fitting, we need to transform the original model into the one with as many linear parameters as possible. Please transform the above model, and specify the linear and nonlinear parameters after transformation.
    2. Please describe how you approach the curve fitting problem after the transformation.
  16. Order selection for polynomial fitting based on leave-one-out cross validation: In this exercise, you need to write a MATLAB function that can select the order of a fitting polynomial based on LOOCV (leave-one-out cross validation) described in the class. More specifically, you need to write a function polyFitOrderSelect.m with the following input/output format:
    [bestOrder, vRmse, tRmse]=polyFitOrderSelect(data, maxOrder)
    where
    • "data" is the single-input-single-output dataset for the problem. The first row is the input data, while the second row is the corresponding output data.
    • "maxOrder" is the maximum order to be tried by the function. Note that maxOrder should not be greater than size(data,2)-1. (Why?)
    • "bestOrder" is the order that generates the minimum validation RMSE (root mean squared error).
    • "vRmse" is the vector of validation RMSE. (The length of vRmse is maxOrder+1.)
    • "tRmse" is the vector of training RMSE. (The length of tRmse is maxOrder+1.)
    Hint:
    • Note that to alleviate the problem of truncation and round-off error, you need to perform z-normalization on the input part of the dataset. More specifically, to follow the spirit of LOOCV, you need to find $\mu$ and $\sigma$ based on the training set only, and then use them to normalize both the training and validation sets.
    • If you don't do z-normalization, you may get the right answers only some of the time, simply because the data is generated randomly in our judge system. That is, z-normalization is likely to give you a more reliable answer to polynomial fitting.
    • For a specific polynomial order, LOOCV needs to use one entry as the validation set and all the others as the training set for each iteration. The final RMSE is defined as the square root of SSE (sum of squared error) divided by data count. As a result, for a complete loop of LOOCV, the data counts for validation SSE and training SSE are $n$ and $n(n-1)$, respectively, where $n$ is the size of the whole dataset. (Check out the following code template, where dataCount is $n$.)
    • Here is a code template:

      10-曲線擬合與迴歸分析/polyFitOrderSelectTemplate.m
      function [bestOrder, vRmse, tRmse]=polyFitOrderSelect(data, maxOrder)
      %polyFitOrderSelect: Polynomial order selection based on LOOCV (leave-one-out cross validation)
      %
      % Usage: [bestOrder, vRmse, tRmse]=polyFitOrderSelect(data, maxOrder)
      dataCount=size(data, 2);      % Count of data pairs
      orders=0:maxOrder;            % All possible orders
      orderCount=length(orders);    % Count of orders for test
      tRmse=zeros(orderCount, 1);   % Training RMSE (root mean squared error)
      vRmse=zeros(orderCount, 1);   % Validation RMSE (root mean squared error)
      tSse=zeros(dataCount, 1);     % SSE (sum of squared error) of training data of LOOCV for a specific order
      vSse=zeros(dataCount, 1);     % SSE (sum of squared error) of validation data of LOOCV for a specific order
      for i=1:orderCount
          order=orders(i);
          for j=1:dataCount         % Perform LOOCV
              vData=data(:,j);      % Validation data (which is held out)
              tData=data; tData(:,j)=[];   % Training data
              mu= ...               % Find mu of the input part of tData
              sigma=...             % Find sigma of the input part of tData
              tData(1,:)=...        % Perform z-normalization on the input part of tData
              vData(1,:)=...        % Perform z-normalization on the input part of vData (using the mu and sigma from tData)
              ...
              tSse(j)=...           % Compute SSE of tData
              vSse(j)=...           % Compute SSE of vData
          end
          tRmse(i)=sqrt(sum(tSse)/(dataCount*(dataCount-1)));
          vRmse(i)=sqrt(sum(vSse)/dataCount);
      end
      [minValue, minIndex]=min(vRmse);
      bestOrder=orders(minIndex);

    Here is a test example:

      Example 2: 10-曲線擬合與迴歸分析/polyFitOrderSelectTest.m
      %% Generate dataset
      fcn=@(x)sin(x)+exp(-x/5);
      %ezplot(fcn, [-10 10]);
      dataCount=50;   % No. of data pairs
      x=20*rand(1,dataCount)-10;
      y=fcn(x)+0.2*randn(1,dataCount);
      %plot(x, y, 'marker', 'o', 'linestyle', 'none', 'color', 'r');
      data=[x; y];
      fprintf('data=%s\n', mat2str(data));
      %% Perform model complexity determination based on LOOCV (leave-one-out cross validation)
      maxOrder=20;
      [orderBest, vRmse, tRmse]=polyFitOrderSelect(data, maxOrder);
      fprintf('orderBest=%g\n', orderBest);
      fprintf('vRmse=%s\n', mat2str(vRmse));
      fprintf('tRmse=%s\n', mat2str(tRmse));
      %% Plot the result
      figure;
      plot(0:maxOrder, tRmse, 'b.-', 0:maxOrder, vRmse, 'g.-');
      line(orderBest, vRmse(orderBest+1), 'color', 'r', 'marker', 'o');
      set(gca, 'yscale', 'log');
      legend('Training RMSE', 'Validation RMSE', 'Min. of Validation RMSE', 'location', 'best');
      xlabel('Orders of fitting polynomials'); ylabel('RMSE');
      %% Polynomial fitting based on the selected order
      x=data(1,:); y=data(2,:);
      xNorm=(x-mean(x))/std(x);
      poly=polyfit(xNorm, y, orderBest);
      xDense=linspace(-10, 10);
      yDense=polyval(poly, (xDense-mean(x))/std(x));
      figure; plot(xDense, yDense, x, y, 'o');

      Output:
      data=[3.18218131111339 -1.18762258702026 -3.03227653248418 -0.0677135078479463 3.96579906729307 -9.54507900094835 -9.15307187695309 1.1814789810637 4.18585308504255 7.89958927370291 -9.91998242990989 6.58363817544357 5.17592273668605 -2.73528359345039 -4.1464682126446 -2.87402882090714 5.01173587069482 8.77179556040694 7.73822081554785 -1.52338467759702 -7.19254203863555 -8.86814898006054 0.0897452630148408 1.00965062158236 9.02838409955871 -9.32706281830323 -4.85351571957086 -2.23373302212051 0.115613784777961 -6.67197717254632 -4.3878130317223 9.18821242707169 -3.51286792188582 -1.7656104401137 -1.93479124962791 -4.68252990102949 -5.22761862732236 9.79628657351691 9.06749886804899 5.13677786805179 -4.96667747161248 -8.48954300083939 3.24061176905699 -8.15269135410554 -8.25304696708314 4.39637036816794 -2.6737532720031 8.86657350663039 1.73547716608639 8.1647689095997;0.232899224213904 -0.0799433849165456 1.75304652060637 0.965036214032515 -0.385559790812596 6.99852715603427 6.0270974255789 1.68424710388782 -0.908233825694012 1.26542671290538 7.92506297864626 0.644196484422006 -0.118952701591702 1.4011059730042 2.99401919850115 1.58978066562779 -0.32281764647044 0.812025133830034 1.50277229803285 0.374514340178889 3.24113919552416 5.34215044768062 1.05530044714563 1.97113254752229 0.359590487802781 6.28781181151787 3.49653610583923 0.664648421302197 1.01417111012126 3.32574948921258 3.33323685892824 0.256653185628986 2.28906507986727 -0.0132796501710816 0.29109020727861 3.55332804053328 3.98729140190471 -0.217259508108903 0.296299355833338 -0.474021837143569 3.47449926131264 4.61548837925692 0.542091669381333 4.22688023641731 4.34032409554792 -0.65072035931881 0.989650754757774 0.94725080568874 1.25004508759507 1.29810080749568]
      orderBest=13
      vRmse=[2.15109313904129;1.33601776951585;0.800571444344469;0.804608903299207;0.833965253687948;0.765212906265815;0.784707626871231;0.838974203051763;1.17115780810961;0.446320785338168;0.689423282339673;0.526100702413789;0.746205207850818;0.221639227187878;0.345743933556517;0.463895437321051;0.422505768291129;1.67081491099868;0.748204986112455;3.65476994470628;3.5554528196574]
      tRmse=[2.10763223194116;1.27365745614921;0.749481465950527;0.739797257761671;0.739359231150419;0.69028094001186;0.689340878014715;0.533583051816713;0.531317069184323;0.257111974541123;0.25676110949832;0.177871637753501;0.176998409274441;0.156614139472593;0.155361001941745;0.155062163199514;0.154555341212791;0.152201930331697;0.147444402107002;0.145711282615697;0.138464541035888]

  17. Origin-centered ellipse fitting: In this exercise, we will transform a nonlinear ellipse model (centered at the origin) into a linear one, and use it for data fitting.
    1. The equation for an ellipse centered at the origin can be expressed as follows: $$ \left(\frac{x}{a}\right)^2+\left(\frac{y}{b}\right)^2=1 $$ Apparently there are 2 nonlinear parameters a and b in the above equation. Please transform the above equation into a model with linear parameters. In particular, you need to express the original parameters in terms of the linear parameters.
    2. We have a sample dataset of 100 points which are located near an ellipse (centered at the origin) in a 2D plane. The data can be plotted as follows:

      Example 3: 10-曲線擬合與迴歸分析/goShowEllipseData.m
      load ellipseData.txt
      x=ellipseData(:,1); y=ellipseData(:,2);
      plot(x, y, '.'); axis image

      If your linear model is the same as mine, you should be able to plot the model together with the dataset.
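      If your transformation is, for instance, $[x^2, y^2]\,[\alpha_1; \alpha_2]=1$ with $a=1/\sqrt{\alpha_1}$ and $b=1/\sqrt{\alpha_2}$ (one possible answer to part 1), the fit can be sketched as:

        load ellipseData.txt
        x=ellipseData(:,1); y=ellipseData(:,2);
        alpha=[x.^2, y.^2]\ones(size(x));    % linear parameters via left division
        a=1/sqrt(alpha(1)); b=1/sqrt(alpha(2));
        t=linspace(0, 2*pi);
        plot(x, y, '.', a*cos(t), b*sin(t)); axis image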

  18. Circle fitting: A circle can be described as $$(x-a)^2+(y-b)^2=r^2,$$ where $(a, b)$ is the center and $r$ is the radius of the circle. We can transform the circle model into a linear one by noting that: $$ \left[ 2x, 2y, 1\right] \left[ \begin{matrix}a\\b\\r^2-a^2-b^2\end{matrix} \right] =x^2+y^2 $$ Write a function circleFit.m with the following usage:
    output = circleFit(X, method)
    where
    • X: a 2-by-n dataset matrix, with each column being an observation.
    • method: 1 for using the above transformation method, 2 for using the transformation method plus "fminsearch".
    • output: a column vector of the derived $[a, b, r]^T$.
    Note that when "fminsearch" is used, the objective function should be defined as follows: $$ J(X; a, b, r)=\sum_{i=1}^n(\|\mathbf{x}_i-[a, b]^T\|-r)^2, $$ where $\mathbf{x}_i$ is the $i$th column of the dataset matrix X, and $\|\mathbf{x}_i-[a, b]^T\|$ is the distance from $\mathbf{x}_i$ to the circle's center.

    Here are two test examples which can be run with circleFit.p:

    1. method=1:

      Example 4: 10-曲線擬合與迴歸分析/circleFit01.m
      data=[12 -5 13 -3 -8 3 7 -4 7 -4 -4 -1 -5 2 13 -5 -6 5 -5 -8 -6 -7 2 9 -5 -4 -5 8 4 -1 12 1 7 11 11 -1 9 0 -5 14 -3 -8 -3 12 -1 -1 13 7 -6 -7 12 12 -8 10 6 14 -7 6 10 12 8 14 7 8 -1 10 9 13 3 13 -7 12 7 13 -5 -7 -2 9 10 -6 7 7 14 10 9 -6 11 -3 12 -6 8 12 9 10 11 11 1 1 -6 -4; 3 13 6 1 9 16 15 1 17 13 -1 -3 9 17 12 5 3 -2 0 7 6 10 15 -4 8 -1 0 14 17 16 12 -3 -3 6 11 -3 14 16 13 9 -2 6 12 7 -4 15 5 -3 12 4 7 5 4 2 -1 6 10 16 0 7 1 10 15 -1 17 0 14 5 16 9 0 1 -1 0 12 2 -1 16 0 13 -2 15 7 1 -2 11 11 3 6 10 15 0 15 7 0 4 -4 -1 5 1];
      theta=circleFit(data, 1);
      fprintf('theta=%s\n', mat2str(theta));
      difVec=data-repmat(theta(1:2), 1, size(data,2));
      J=sum((sqrt(sum(difVec.^2))-theta(3)).^2);   % Compute the objective function
      fprintf('J=%g (Obj. function by method 1)\n', J);
      % Plotting
      t=linspace(0, 2*pi);
      x1=theta(1)+theta(3)*cos(t);
      y1=theta(2)+theta(3)*sin(t);
      plot(data(1,:), data(2,:), '.', x1, y1, 'b'); axis image
      legend('Sample data', 'Method=1', 'location', 'northOutside', 'orientation', 'horizontal');

      Output:
      theta=[2.93950025731577;6.59061403290086;9.87460034367112]
      J=118.162 (Obj. function by method 1)

    2. method=2:

      Example 5: 10-曲線擬合與迴歸分析/circleFit02.m
      data=[12 -5 13 -3 -8 3 7 -4 7 -4 -4 -1 -5 2 13 -5 -6 5 -5 -8 -6 -7 2 9 -5 -4 -5 8 4 -1 12 1 7 11 11 -1 9 0 -5 14 -3 -8 -3 12 -1 -1 13 7 -6 -7 12 12 -8 10 6 14 -7 6 10 12 8 14 7 8 -1 10 9 13 3 13 -7 12 7 13 -5 -7 -2 9 10 -6 7 7 14 10 9 -6 11 -3 12 -6 8 12 9 10 11 11 1 1 -6 -4; 3 13 6 1 9 16 15 1 17 13 -1 -3 9 17 12 5 3 -2 0 7 6 10 15 -4 8 -1 0 14 17 16 12 -3 -3 6 11 -3 14 16 13 9 -2 6 12 7 -4 15 5 -3 12 4 7 5 4 2 -1 6 10 16 0 7 1 10 15 -1 17 0 14 5 16 9 0 1 -1 0 12 2 -1 16 0 13 -2 15 7 1 -2 11 11 3 6 10 15 0 15 7 0 4 -4 -1 5 1];
      theta1=circleFit(data, 1);
      theta2=circleFit(data, 2);
      fprintf('Method 1:\n');
      fprintf('theta1=%s\n', mat2str(theta1));
      difVec=data-repmat(theta1(1:2), 1, size(data,2));
      J1=sum((sqrt(sum(difVec.^2))-theta1(3)).^2);   % Compute the objective function
      fprintf('J1=%g (Obj. function)\n', J1);
      fprintf('Method 2:\n');
      fprintf('theta2=%s\n', mat2str(theta2));
      difVec=data-repmat(theta2(1:2), 1, size(data,2));
      J2=sum((sqrt(sum(difVec.^2))-theta2(3)).^2);   % Compute the objective function
      fprintf('J2=%g (Obj. function)\n', J2);
      % Plotting
      t=linspace(0, 2*pi);
      x1=theta1(1)+theta1(3)*cos(t);
      y1=theta1(2)+theta1(3)*sin(t);
      x2=theta2(1)+theta2(3)*cos(t);
      y2=theta2(2)+theta2(3)*sin(t);
      plot(data(1,:), data(2,:), '.', x1, y1, 'b', x2, y2, 'm'); axis image
      legend('Sample data', 'Method=1', 'Method=2', 'location', 'northOutside', 'orientation', 'horizontal');

      Output:
      Method 1:
      theta1=[2.93950025731577;6.59061403290086;9.87460034367112]
      J1=118.162 (Obj. function)
      Method 2:
      theta2=[2.92545050062965;6.66709328154231;9.81802966418014]
      J2=117.542 (Obj. function)
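    A minimal sketch of circleFit based on the transformation stated above; method 2 simply refines the method-1 result with fminsearch on the given objective function:

      function output=circleFit(X, method)
      % circleFit: Circle fitting via the linear transformation (+ optional fminsearch refinement)
      x=X(1,:)'; y=X(2,:)';
      theta=[2*x, 2*y, ones(size(x))]\(x.^2+y.^2);    % [a; b; r^2-a^2-b^2] via left division
      a=theta(1); b=theta(2); r=sqrt(theta(3)+a^2+b^2);
      output=[a; b; r];
      if method==2    % refine all 3 parameters on the true geometric objective J
          obj=@(p) sum((sqrt((x-p(1)).^2+(y-p(2)).^2)-p(3)).^2);
          output=fminsearch(obj, output);
      end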

  19. Ball fitting: A ball can be described as $$(x-a)^2+(y-b)^2+(z-c)^2=r^2,$$ where $(a, b, c)$ is the center and $r$ is the radius of the ball. We can transform the ball model into a linear one by noting that: $$ \left[ 2x, 2y, 2z, 1\right] \left[ \begin{matrix}a\\b\\c\\r^2-a^2-b^2-c^2\end{matrix} \right] =x^2+y^2+z^2 $$ Write a function ballFit.m with the following usage:
    output = ballFit(X, method)
    where
    • X: a 3-by-n dataset matrix, with each column being an observation.
    • method: 1 for using the above transformation method, 2 for using the transformation method plus "fminsearch".
    • output: a column vector of the derived $[a, b, c, r]^T$.
    Note that when "fminsearch" is used, the objective function should be defined as follows: $$ J(X; a, b, c, r)=\sum_{i=1}^n(\|\mathbf{x}_i-[a, b, c]^T\|-r)^2, $$ where $\mathbf{x}_i$ is the $i$th column of the dataset matrix X, and $\|\mathbf{x}_i-[a, b, c]^T\|$ is the distance from $\mathbf{x}_i$ to the ball's center.
  20. Ellipse fitting: In this exercise, you need to recast a nonlinear ellipse model as a model with both linear and nonlinear parameters, and then employ LSE (least-squares estimate) for the linear parameters and fminsearch for the nonlinear ones. The equation for a general ellipse model can be expressed as follows: $$ \left(\frac{x-c_x}{r_x}\right)^2+\left(\frac{y-c_y}{r_y}\right)^2=1, $$ where the parameter set is $\{c_x, c_y, r_x, r_y\}$. In particular, it is rather easy to reorganize the model and treat $\{c_x, c_y\}$ as nonlinear parameters, while $\{r_x, r_y\}$ can be identified via LSE, which minimizes the following SSE (sum of squared error): $$ sse(data; c_x, c_y, r_x, r_y)=\sum_{i=1}^n \left[\left(\frac{x_i-c_x}{r_x}\right)^2+\left(\frac{y_i-c_y}{r_y}\right)^2-1\right]^2, $$ where $data=\{(x_i, y_i)|i=1\cdots n\}$ is the sample dataset.

    Write a function ellipseFit.m with the following I/O formats to perform ellipse fitting:

    [theta, sse]=ellipseFit(data);
    where
    • Inputs:
      • data: the sample dataset packed as a matrix of size $2 \times n$, with column $i$ being $(x_i, y_i)^T$
    • Outputs:
      • theta: $\{c_x, c_y, r_x, r_y\}$, the identified parameter set, with the first two identified by fminsearch (with the default optimization options) and the last two by LSE (on the transformed linear model). Note that when using fminsearch, the initial guess of the center should be the average of the dataset.
      • sse: SSE (sum of squared error)
    To make your program modular, the above function should call a local function (residing in the same file) to compute the SSE, as shown next:
    [sse, radius]=sseOfEllipseFit(center, data);
    where
    • Inputs:
      • center: $[c_x, c_y]$
      • data: the sample dataset packed as a matrix of size $2 \times n$, with column $i$ being $(x_i, y_i)^T$
    • Outputs:
      • sse: SSE
      • radius: $[r_x, r_y]$, which is identified by LSE.

    Here is a code template:

    10-曲線擬合與迴歸分析/ellipseFitTemplate.m
    function [theta, sse]=ellipseFit(data)
    center0=...    % Initial guess of the center
    center=fminsearch(@(x)sseOfEllipseFit(x, data), center0);   % Use fminsearch to find center
    [sse, radius]=sseOfEllipseFit(center, data);   % Use sseOfEllipseFit to obtain the final sse & radius
    theta=[center, radius];

    % Function that returns SSE and the linear parameters
    function [sse, radius]=sseOfEllipseFit(center, data)
    ...
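    One possible way to fill in the local function (a sketch): once the center is fixed, the model is linear in $[1/r_x^2, 1/r_y^2]$, since $[(x_i-c_x)^2, (y_i-c_y)^2]\,[1/r_x^2; 1/r_y^2]=1$, so the radii follow from left division:

      function [sse, radius]=sseOfEllipseFit(center, data)
      dx2=(data(1,:)'-center(1)).^2;    % (x_i-cx)^2 as a column vector
      dy2=(data(2,:)'-center(2)).^2;    % (y_i-cy)^2 as a column vector
      alpha=[dx2, dy2]\ones(size(dx2)); % [1/rx^2; 1/ry^2] via left division
      radius=1./sqrt(alpha');           % [rx, ry]; assumes alpha>0 (may need safeguarding for a poor center)
      sse=sum(([dx2, dy2]*alpha-1).^2); % SSE as defined in the problem statement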

    We have a sample dataset of 100 points which are located near an ellipse in a 2D plane. Here is a test example for ellipse fitting:

      Example 6: 10-曲線擬合與迴歸分析/ellipseFit01.m
      %% Load dataset
      data=load('ellipseData02.txt');
      %% Perform ellipse fitting
      [theta, sse]=ellipseFit(data);
      fprintf('data=%s\n', mat2str(data));
      fprintf('theta=%s\n', mat2str(theta));
      fprintf('sse=%s\n', mat2str(sse));
      %% Plot the result
      t=linspace(0, 2*pi);
      x=theta(3)*cos(t)+theta(1);
      y=theta(4)*sin(t)+theta(2);
      plot(data(:,1), data(:,2), '.', x, y); axis image

      Output:
      data=[6.943669 6.956414;3.863372 -3.179822;7.479049 0.035599;2.72278 -0.759215;7.504685 -2.473687;6.136369 7.186511;5.142204 -3.654461;3.431942 6.87721;2.568781 7.470799;6.485244 7.416331;6.082399 -3.00738;2.785397 -3.078278;4.584762 -5.29201;5.982497 -2.849705;7.702717 4.262289;7.88278 0.82341;2.028964 1.259867;5.028364 -3.10091;4.775562 -5.806656;6.543223 -1.670509;6.624567 7.840587;1.970161 3.700347;2.789771 -3.215892;7.752161 -0.877497;6.487453 -2.908464;7.441706 -0.457024;2.416857 -2.0069;2.375129 0.314316;6.833214 0.025482;7.97412 2.808571;7.21142 -1.487212;2.384697 4.629564;7.906399 1.642699;4.940312 -2.527724;6.739861 7.652989;6.851925 6.204898;2.594921 -1.592556;4.975183 7.767815;3.6409 5.898707;2.633963 2.589532;2.49532 5.282087;2.734253 5.471611;2.676895 -2.39543;6.404217 6.354402;6.045036 5.734545;7.389045 4.839189;3.847061 7.538169;5.348442 -5.016384;5.392913 8.420645;4.85342 -4.261808;4.13112 -3.71835;6.439174 -4.790832;6.289042 -4.058555;4.159888 8.188762;3.995854 6.797436;2.060171 1.296461;3.859529 9.318816;6.470809 -2.428759;5.954352 -2.537192;2.234747 2.257185;4.664722 9.155501;3.737271 -3.040276;5.415394 9.919628;2.142557 2.717622;2.785038 5.297952;2.068085 2.577277;7.981283 2.141722;5.088078 -4.038877;7.697508 1.723183;5.325731 7.708451;2.147194 0.637131;7.714925 3.518716;4.977362 -4.548395;2.657172 -2.530546;6.720098 -2.750229;7.823925 0.983773;7.789227 0.869608;2.406499 5.465912;8.124483 3.034249;1.969186 0.335981;5.72599 7.708082;5.599573 9.14685;3.67944 8.800705;7.517902 5.434984;4.972658 -2.306801;4.948914 -3.524008;2.052419 0.756242;3.470402 9.295346;6.444968 -1.679208;2.204346 0.961182;8.002657 0.820589;7.120281 -2.280023;3.319414 6.590761;2.063397 2.097367;3.419272 4.905064;3.004756 -2.820644;5.719447 -3.933177;4.939946 -3.84971;7.460787 4.473031;6.29656 -2.860855]
      theta=[4.93283680981251 2.20138623171132 2.88208225225985 6.56980819637876]
      sse=5.29153300049955

  21. Ellipse fitting via coordinate optimization: Repeat the previous exercise using coordinate optimization. In other words, the parameters of the ellipse model can be divided into two disjoint sets, and for each set you can derive a linear model with respect to that set.
    1. Guess good values of $[c_x, c_y]$ and use the linear model of $[r_x, r_y]$ to obtain $[r_x, r_y]$. What is the linear model? What is the SSE (sum of squared error) in terms of the original model?
    2. Guess good values of $[r_x, r_y]$ and use the linear model of $[c_x, c_y]$ to obtain $[c_x, c_y]$. What is the linear model? What is the SSE in terms of the original model?
    3. Can you use coordinate optimization to find $[c_x, c_y, r_x, r_y]$? Why or why not? Please write a script to verify your answer (a sketch of the alternating loop is given below).
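    A sketch of the alternating loop, assuming the two linear sub-models of parts 1 and 2 (the function name and initial guesses are illustrative). With the radii fixed, the center enters linearly through $[2x/r_x^2,\; 2y/r_y^2,\; 1]\,[c_x;\; c_y;\; k]=x^2/r_x^2+y^2/r_y^2$, where $k=1-c_x^2/r_x^2-c_y^2/r_y^2$ is treated as a dummy linear parameter.

      function [theta, sse]=ellipseFitCoord(data)   % hypothetical function name
      x=data(1,:)'; y=data(2,:)';
      c=[mean(x); mean(y)];                         % initial guess of the center
      for iter=1:50                                 % alternate between the two parameter sets
          alpha=[(x-c(1)).^2, (y-c(2)).^2]\ones(size(x));   % center fixed -> [1/rx^2; 1/ry^2]
          r=1./sqrt(alpha);
          beta=[2*x/r(1)^2, 2*y/r(2)^2, ones(size(x))]\(x.^2/r(1)^2+y.^2/r(2)^2);   % radii fixed -> center
          c=beta(1:2);
      end
      theta=[c; r];
      sse=sum((((x-c(1))/r(1)).^2+((y-c(2))/r(2)).^2-1).^2);

    Note that each sub-step minimizes the SSE of its own transformed model rather than the original SSE, so the loop is not guaranteed to decrease the original SSE monotonically; this observation is the crux of part 3.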
